RidgeRun GStreamer Analytics Example: Synchronization Debug
Example: Remote Debug Logging for Pipeline Synchronization Issues
Synchronization issues are common in pipelines with live sources, often manifesting as dropped buffers when processing stages can’t keep up with real-time playback. This example shows how the RidgeRun GStreamer Analytics tool can be used to capture and analyze detailed logs for diagnosing this type of problem.
Pipeline Scenario
The test pipeline simulates a live video stream with intentional processing delays: three identity elements each sleep 25 ms per buffer (sleep-time is given in microseconds), adding roughly 75 ms of serial processing per frame against the ~33 ms frame period of videotestsrc at its default 30 fps. As a result, buffers arrive late at the sink element, triggering warnings and drops:
```bash
RR_PROC_NAME=sync-test GST_DEBUG="DEBUG" GST_REMOTE_DEBUG="DEBUG" GST_TRACERS="rrlogtracer" \
gst-launch-1.0 videotestsrc is-live=true pattern=ball \
  ! identity sleep-time=25000 ! identity sleep-time=25000 \
  ! identity sleep-time=25000 ! autovideosink
```
The key environment variables in this setup are:
| Variable | Purpose |
|---|---|
| RR_PROC_NAME | Sets the name under which the process will appear in Grafana. |
| GST_DEBUG | Controls what is captured in the local circular log file on the device. |
| GST_REMOTE_DEBUG | Defines what logs are sent remotely to Grafana Drilldown for analysis. |
| GST_TRACERS | Enables the RidgeRun custom tracer (rrlogtracer) to capture and structure GStreamer logs. |
Note: Here, GST_REMOTE_DEBUG=DEBUG ensures that all debug data is captured and streamed to Grafana, with finer filtering applied later in the Drilldown interface. Alternatively, targeted filters can be defined upfront, for example:

```bash
GST_REMOTE_DEBUG="GST_PERFORMANCE:5,basesink:7"
```
Exploring Logs in Grafana Drilldown
Once the pipeline is running, logs are available for inspection through Grafana Drilldown.
The Drilldown interface organizes logs using a rich labeling system and provides several ways to narrow the focus to the most relevant data.
System and Process Context
Start by selecting the appropriate system in Drilldown to scope logs to the device running the pipelines:
- Open Grafana and navigate to the Drilldown panel.
- In the list of systems, select the system where the pipeline is running; Grafana will then display logs from that specific system.

From there, the process label can be used to focus specifically on logs from the sync-test process.
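Outside the UI, the same label-based scoping can be reproduced against the Loki data source that backs Drilldown; the sketch below assumes logcli is installed and pointed at that instance, and that the label names match those shown in the interface (system, process, and so on):

```bash
# Hypothetical address; replace with your Loki instance.
export LOKI_ADDR=http://grafana-host:3100

# Tail live logs from the sync-test process on a given system.
# The label names here are assumptions based on the labels Drilldown exposes.
logcli query --tail '{system="my-device", process="sync-test"}'
```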

Pipeline and Category Filters
Logs can be refined even further using the pipeline label, which is especially useful on systems running multiple GStreamer pipelines simultaneously.
Two categories are particularly important for synchronization debugging:
- GST_PERFORMANCE
  - Select the GST_PERFORMANCE category to focus on performance-related logs.
  - It highlights issues such as dropped buffers.
  - "buffer is too late" messages show when a frame was missed, including the expected render time vs. the actual pipeline clock time.
  - These messages are key to understanding how far the pipeline has drifted from real-time playback.
- basesink
  - Provides detailed sink timing data.
  - Pair this with "Filter logs by string" (with regex enabled) to isolate relevant information such as timestamps and jitter, revealing whether buffers arrived early or late. A useful pattern is:
    buffer late|buffer is too late|base_time|got times|possibly|jitter
  - This helps correlate buffer timing data with performance warnings to pinpoint the root cause.
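The same pattern works on the local circular log captured via GST_DEBUG; a minimal sketch, using sync-test.log as a stand-in for wherever the tool stores the local log file:

```bash
# sync-test.log is a placeholder for the device's local circular log file;
# -E enables the extended regex syntax used by the pattern above.
grep -E 'buffer late|buffer is too late|base_time|got times|possibly|jitter' sync-test.log
```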

Focusing on Latency
To evaluate latency behavior, search logs for the keyword have_latency.
These entries reveal how latency is calculated and distributed across the pipeline, providing insight into whether the configured pipeline latency is sufficient for real-time playback.
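As a complementary local check, GStreamer's stock latency tracer (separate from rrlogtracer) can measure end-to-end latency directly on the device; a sketch of the same test pipeline with that tracer enabled:

```bash
# The upstream latency tracer logs its measurements under the GST_TRACER
# debug category at level 7 (TRACE).
GST_TRACERS="latency" GST_DEBUG="GST_TRACER:7" \
gst-launch-1.0 videotestsrc is-live=true pattern=ball \
  ! identity sleep-time=25000 ! identity sleep-time=25000 \
  ! identity sleep-time=25000 ! autovideosink
```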

Filtering by Level
Logs can also be filtered by debug level, making it easy to isolate warnings or errors.
Selecting only WARNING level messages provides a clean view of critical events such as buffer drops.
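If only warnings and errors should ever leave the device, the remote level can also be capped at launch; a sketch assuming GST_REMOTE_DEBUG accepts the same level syntax as GST_DEBUG, as the targeted-filter example earlier suggests:

```bash
# Ship only ERROR (1) and WARNING (2) messages to Grafana,
# while the local log keeps full DEBUG detail.
RR_PROC_NAME=sync-test GST_DEBUG="DEBUG" GST_REMOTE_DEBUG="*:2" \
GST_TRACERS="rrlogtracer" gst-launch-1.0 videotestsrc is-live=true pattern=ball \
  ! identity sleep-time=25000 ! identity sleep-time=25000 \
  ! identity sleep-time=25000 ! autovideosink
```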

Visualizing Log Distribution
The Labels tab provides a graphical representation of how logs are distributed across systems, processes, categories, levels and other defined labels.
These visualizations make it easy to identify patterns, such as whether warnings spike at specific points in time or are tied to certain parts of the pipeline.
Filters applied in the Logs tab are also reflected here, ensuring consistent views across both modes. Click on any graph to inspect detailed data.

Filtering Strategies
While this example uses GST_REMOTE_DEBUG=DEBUG to capture all logs and apply filtering in Grafana, in many cases it’s useful to define more focused filters when running the pipeline to reduce noise.
Filtering can also be adjusted dynamically at runtime through the RidgeRun custom Grafana plugin located in the provisioned dashboards:
- Select the system and process of interest.
- Modify the Filtered Debug Level field to change the filtering level without restarting the pipeline.

Closing Remarks
The synchronization case demonstrates how RidgeRun GStreamer Analytics transforms a complex debugging challenge into a manageable workflow. Instead of relying on raw log dumps and manual inspection, engineers can use Grafana Drilldown to filter, group, and visualize debug traces in real time. By aligning these logs with pipeline latency and jitter measurements, synchronization issues such as dropped or late buffers become immediately visible. This capability provides faster root-cause identification and more confidence when tuning live pipelines.